83 research outputs found

    From Nonlinear Identification to Linear Parameter Varying Models: Benchmark Examples

    Linear parameter-varying (LPV) models form a powerful model class for analyzing and controlling nonlinear systems. Identifying an LPV model of a nonlinear system can be challenging due to the difficulty of selecting the scheduling variable(s) a priori, especially when a first-principles understanding of the system is unavailable. This paper presents a systematic LPV embedding approach starting from nonlinear fractional representation models. A nonlinear system is first identified using a nonlinear block-oriented linear fractional representation (LFR) model. This nonlinear LFR model class is embedded into the LPV model class by factorizing the static nonlinear block present in the model. The factorization yields either an LPV-LFR model or an LPV state-space model with affine dependency. This approach facilitates the selection of the scheduling variable from a data-driven perspective. Furthermore, the estimation is not affected by measurement noise on the scheduling variables, an issue often left untreated by LPV model identification methods. The proposed approach is illustrated on two well-established nonlinear modeling benchmark examples.
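The core embedding step can be sketched numerically: a static nonlinearity f(w) is factorized as f(w) = Λ(w)·w, and Λ(w) then serves as the data-driven scheduling variable, making the model affine in the scheduling signal. The specific nonlinearity below is a hypothetical example for illustration only, not one from the paper.

```python
import numpy as np

# Hypothetical static nonlinear block of an LFR model (illustrative choice):
# f(w) = w**3 + 0.5*w
def f(w):
    return w**3 + 0.5 * w

# Factorization f(w) = Lambda(w) * w; Lambda(w) = w**2 + 0.5 is well defined
# for all w here and plays the role of the scheduling variable p.
def scheduling(w):
    return w**2 + 0.5

w = np.linspace(-2.0, 2.0, 101)
p = scheduling(w)

# The LPV (factorized) output p*w reproduces the nonlinear output exactly,
# so the embedding introduces no approximation error:
print(np.max(np.abs(p * w - f(w))))  # ≈ 0
```

Because p·w matches f(w) exactly, the nonlinear block is replaced by a parameter-varying gain without loss of accuracy; the choice of factorization is not unique, which is what makes the scheduling-variable selection a design step.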

    On the Simulation of Polynomial NARMAX Models

    In this paper, we show that the common approach for simulating non-linear stochastic models used in system identification, namely setting the noise contributions to zero, results in a biased response. We also demonstrate that achieving unbiased simulation of finite-order NARMAX models, in general, requires infinite-order simulation models. The main contributions of the paper are two-fold. Firstly, an alternative representation of polynomial NARMAX models, based on Hermite polynomials, is proposed. The proposed representation provides a convenient way to translate a polynomial NARMAX model into a corresponding simulation model by simply setting certain terms to zero. This translation is exact when the simulation model can be written as an NFIR model. Secondly, a parameterized approximation method is proposed to curtail infinite-order simulation models to a finite order. The proposed approximation can be viewed as a trade-off between the conventional approach of setting noise contributions to zero and the approach of incorporating the bias introduced by higher-order moments of the noise distribution. Simulation studies are provided to illustrate the utility of the proposed representation and approximation method. Comment: Accepted in IEEE CDC 201
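The bias from zero-noise simulation is easy to demonstrate: any even-powered noise term has a nonzero mean, so dropping it shifts the expected response. The toy polynomial NARMAX model below is an assumption for illustration, not a model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0          # noise variance
N, M = 200, 20000     # time steps, Monte Carlo runs
u = np.ones(N)

# Hypothetical polynomial NARMAX model (illustrative):
#   y_k = 0.5*u_k + e_k + 0.2*e_{k-1}**2
def simulate(e):
    y = 0.5 * u.copy()
    y[1:] += e[1:] + 0.2 * e[:-1] ** 2
    return y

# "Conventional" simulation: set all noise contributions to zero.
y_naive = simulate(np.zeros(N))

# Monte Carlo estimate of the true mean response under the noise model.
y_mc = np.mean(
    [simulate(rng.normal(0.0, np.sqrt(sigma2), N)) for _ in range(M)], axis=0
)

# The zero-noise simulation is biased by E[0.2*e**2] = 0.2*sigma2:
print((y_mc[1:] - y_naive[1:]).mean())  # ≈ 0.2
```

The squared-noise term has expectation 0.2·σ², so the noise-free response systematically underestimates the mean output; an unbiased simulation model must carry this moment explicitly, which is what the Hermite-polynomial representation makes convenient.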

    Comparison of Deep Learning Methods for System Identification


    Deep subspace encoders for nonlinear system identification

    Using artificial neural networks (ANNs) for nonlinear system identification has proven to be a promising approach, but despite all recent research efforts, many practical and theoretical problems remain open. Specifically, noise handling and modeling, consistency, and reliable estimation under minimization of the prediction error are among the most severe problems. The latter comes with numerous practical challenges, such as the explosion of the computational cost with the number of data samples and the occurrence of instabilities during optimization. In this paper, we aim to overcome these issues by proposing a method that uses a truncated prediction loss and a subspace encoder for state estimation. The truncated prediction loss is computed by selecting multiple truncated subsections from the time series and computing the average prediction loss over them. To obtain a computationally efficient estimation method that minimizes the truncated prediction loss, a subspace encoder represented by an artificial neural network is introduced. This encoder approximates the state reconstructability map of the estimated model to provide an initial state for each truncated subsection given past inputs and outputs. By theoretical analysis, we show that, under mild conditions, the proposed method is locally consistent, increases optimization stability, and achieves increased data efficiency by allowing overlap between the subsections. Lastly, we provide practical insights and user guidelines using a numerical example and state-of-the-art benchmark results.
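The truncated prediction loss described above can be sketched on a toy problem: overlapping subsections are cut from the time series, an encoder supplies the initial state of each one from past inputs and outputs, and the per-subsection simulation errors are averaged. This is a minimal sketch assuming a known 1-state linear system and an exact linear encoder in place of the paper's neural network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known 1-state linear system standing in for the estimated model (assumption):
#   x_{k+1} = a*x_k + b*u_k,   y_k = x_k
a, b = 0.8, 1.0
N, T, n_past = 300, 20, 3   # series length, subsection length, encoder lag

u = rng.normal(size=N)
x = np.zeros(N + 1)
for k in range(N):
    x[k + 1] = a * x[k] + b * u[k]
y = x[:N]

def encoder(y_past, u_past):
    # Reconstructability map for this system: propagate the last observed
    # state one step forward. A neural network would learn this map.
    return a * y_past[-1] + b * u_past[-1]

def truncated_loss(starts):
    losses = []
    for s in starts:
        xh = encoder(y[s - n_past:s], u[s - n_past:s])  # initial state estimate
        err = 0.0
        for k in range(s, s + T):                       # T-step simulation
            err += (y[k] - xh) ** 2
            xh = a * xh + b * u[k]
        losses.append(err / T)
    return np.mean(losses)

# Overlapping subsections (stride 5 < T) reuse samples for data efficiency:
starts = np.arange(n_past, N - T, 5)
print(truncated_loss(starts))  # ≈ 0 since encoder and model are exact here
```

Because each subsection only needs T model evaluations, the cost per gradient step stays bounded regardless of the total data length, which is the computational advantage over full simulation-error minimization.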

    Nonlinear state-space identification using deep encoder networks

    Nonlinear state-space identification for dynamical systems is most often performed by minimizing the simulation error to reduce the effect of model errors. This optimization problem becomes computationally expensive for large datasets. Moreover, the problem is also strongly non-convex, often leading to sub-optimal parameter estimates. This paper introduces a method that approximates the simulation loss by splitting the data set into multiple independent sections, similar to the multiple shooting method. This splitting operation allows for the use of stochastic gradient optimization methods, which scale well with data set size and have a smoothing effect on the non-convex cost function. The main contribution of this paper is the introduction of an encoder function to estimate the initial state at the start of each section. The encoder function estimates the initial states using a feed-forward neural network starting from historical input and output samples. The efficiency and performance of the proposed state-space encoder method are illustrated on two well-known benchmarks where, for instance, the method achieves the lowest known simulation error on the Wiener--Hammerstein benchmark.
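The key idea of the encoder function, mapping a window of historical inputs and outputs to the section's initial state, can be illustrated on a noise-free linear system, where such a map exists exactly and can be fit by least squares. The system matrices below are illustrative assumptions; a least-squares regression stands in for the paper's feed-forward neural network.

```python
import numpy as np

rng = np.random.default_rng(2)

# Observable 2-state linear "true" system (an assumption for illustration):
A = np.array([[0.7, 0.2], [0.0, 0.9]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])

N, n_past = 2000, 4
u = rng.normal(size=(N, 1))
x = np.zeros((N + 1, 2))
for k in range(N):
    x[k + 1] = A @ x[k] + B @ u[k]
y = x[:N] @ C.T

# Encoder features: the n_past most recent outputs and inputs before each
# section start; target: the state at the section start.
starts = np.arange(n_past, N)
Phi = np.hstack([
    np.stack([y[s - n_past:s, 0] for s in starts]),
    np.stack([u[s - n_past:s, 0] for s in starts]),
])
X = x[starts]

# Linear least-squares "encoder" in place of the feed-forward network:
W, *_ = np.linalg.lstsq(Phi, X, rcond=None)
x_hat = Phi @ W

print(np.max(np.abs(x_hat - X)))  # tiny: past I/O determine the state
```

For an observable noise-free system the state is an exact linear function of finitely many past inputs and outputs, so the fit is essentially perfect; in the nonlinear, noisy setting this map is only approximate, which is why a learned neural-network encoder is used instead.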

    Supplementary Material:On Automated Multi-objective Identification Using Grammar-based Genetic Programming

    This document contains the supplementary material for the contribution "On Automated Multi-objective Identification Using Grammar-based Genetic Programming".